
    Human error analysis: Review of past accidents and implications for improving robustness of system design

    Since the emergence of the high-technology industry and large industrial systems, developments in new materials and fabrication techniques, together with cutting-edge structural and engineering assessments, have contributed to more reliable and consistent systems, reducing the likelihood of losses. However, recent accidents with catastrophic consequences are acknowledged to be linked to human factors. Therefore, understanding human behavioural characteristics, interlaced with the actual technology aspects and organisational context, is of paramount importance for the safety & reliability field. This study first approaches this multidisciplinary problem by classifying and reviewing data on 200 major accidents, drawn from insurance companies and regulatory authorities, under the Cognitive Reliability and Error Analysis framework. Specific attention is then dedicated to discussing the implications for improving the robustness of system design and for tackling the surrounding factors and tendencies that could lead to the manifestation of human errors.

    Learning from accidents: Investigating the genesis of human errors in multi-attribute settings to improve the organisation of design

    Remarkable recent advances in engineering and system controls, and the consequent development of state-of-the-art technologies, are clearly delivering economic, environmental and safety benefits to society. Recent disasters, however, have put human error in the glare of the media spotlight. The February 2016 train collision in southern Bavaria, Germany, which took 11 lives and left more than 90 people injured, is one of several examples in which human error appears to have played a significant role in a major accident. In this emblematic case, the railway system had multiple safety barriers in place, such as automatic braking if a train crosses a stop signal, but the track controller had reportedly disabled it. When he realised the error and tried to warn the drivers, it was too late (BBC, 2016).

    Learning from major accidents to improve system design

    © 2015 Elsevier Ltd. Despite the massive developments in new technologies, materials and industrial systems, notably supported by advanced structural and risk control assessments, recent major accidents are challenging the practicality and effectiveness of risk control measures designed to improve reliability and reduce the likelihood of losses. Contemporary investigations of accidents that occurred in high-technology systems have highlighted the connection between human-related issues and major events with catastrophic consequences. Consequently, understanding human behavioural characteristics, interlaced with current technology aspects and organisational context, seems to be of paramount importance for the safety & reliability field. First, significant drawbacks related to human performance data collection will be minimised by the development of a novel industrial accidents dataset, the Multi-attribute Technological Accidents Dataset (MATA-D), which groups 238 major accidents from different industrial backgrounds and classifies them under a common framework (the Contextual Control Model used as the basis for the Cognitive Reliability and Error Analysis Method). The collection of accidents and their detailed interpretation will provide a rich data source, enabling the use of integrated information as input to design improvement schemes. Implications for improving the robustness of system design and for tackling the surrounding factors and tendencies that could lead to the manifestation of human errors will then be addressed.

    Postmortem tissue distribution of morphine and its metabolites in a series of heroin related deaths

    The abuse of heroin (diamorphine) and heroin-related deaths are growing around the world. The interpretation of toxicological results from suspected heroin deaths is notoriously difficult, especially in cases where samples may be limited. To help forensic practitioners with heroin interpretation, we determined the concentrations of morphine (M), morphine‐3‐glucuronide (M3G) and morphine‐6‐glucuronide (M6G) in blood (femoral and cardiac), brain (thalamus), liver (deep right lobe), bone marrow (sternum), skeletal muscle (psoas) and vitreous humor in 44 heroin-related deaths. The presence of 6‐monoacetylmorphine (6‐MAM) in any of the postmortem samples was used as confirmation of heroin use. Quantitation was carried out using a validated LC‐MS/MS method with solid-phase extraction. We also determined the presence of papaverine, noscapine and codeine in the samples, substances often found in illicit heroin that may help establish illicit heroin use. The results of this study show that vitreous humor is the best sample in which to detect 6‐MAM (100% of cases), and thus heroin use. The M, M3G and M6G quantitation results allow a degree of interpretation when samples are limited. However, in some cases it may not be possible to determine heroin/morphine use: in 4 cases no morphine, M3G or M6G was detected in muscle (and in 3 cases in bone marrow), even though these analytes were detected in other samples from the same cases. As always, postmortem cases of suspected morphine/heroin intoxication should be interpreted with care and with as much case knowledge as possible.

    A Hybrid Approach to Causality Analysis

    In component-based safety-critical systems, when a system safety property is violated, it is necessary to analyze which components are the cause. Given a system execution trace that exhibits component faults leading to a property violation, our causality analysis formalizes a notion of counterfactual reasoning ("what would the system behavior be if a component had been correct?") and algorithmically derives such alternative system behaviors, without re-executing the system itself. In this paper, we show that we can improve the precision of the analysis if we (1) emulate the execution of components instead of relying on their contracts, and (2) take into consideration input/output dependencies between components, to avoid blaming components for faults induced by other components. We demonstrate the utility of the extended analysis with a case study of a closed-loop patient-controlled analgesia system.
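    The counterfactual test described in the abstract can be sketched in miniature as follows. This is a toy illustration, not the paper's algorithm: the pipeline model, the component names ("sensor", "controller", "pump") and the safety predicate are all invented for the example.

```python
# Toy sketch of counterfactual causality analysis (illustrative only;
# the component pipeline, names and safety property are hypothetical,
# not taken from the paper).

def replay(value, emulators, order):
    """Propagate an input value through a pipeline of components."""
    for name in order:
        value = emulators[name](value)
    return value

def is_cause(suspect, observed, correct, order, initial, safe):
    """A component is a cause of the violation if correcting it alone
    (counterfactually) would have restored the safety property."""
    counterfactual = dict(observed)
    counterfactual[suspect] = correct[suspect]
    actual_out = replay(initial, observed, order)
    fixed_out = replay(initial, counterfactual, order)
    return (not safe(actual_out)) and safe(fixed_out)

# Hypothetical patient-controlled analgesia scenario: the controller
# should cap the requested dose at 5 units but faultily passes it on.
order = ["sensor", "controller", "pump"]
observed = {"sensor": lambda d: d,
            "controller": lambda d: d,         # faulty: no cap applied
            "pump": lambda d: d}
correct = {"sensor": lambda d: d,
           "controller": lambda d: min(d, 5)}  # specification
safe = lambda dose: dose <= 5

print(is_cause("controller", observed, correct, order, 9, safe))  # True
print(is_cause("sensor", observed, correct, order, 9, safe))      # False
```

    Note that only the controller is blamed: correcting the sensor alone does not restore safety, which is the counterfactual criterion at work.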

    Responsibility Analysis by Abstract Interpretation

    Given a behavior of interest in a program, statically determining the corresponding responsible entity is a task of critical importance, especially in program security. Classical static analysis techniques (e.g. dependency analysis, taint analysis, slicing, etc.) assist programmers in narrowing down the scope of responsibility, but none of them can explicitly identify the responsible entity. Meanwhile, classical causality analysis is generally not pertinent for analyzing programs, and the structural equations model (SEM) of actual causality misses some information inherent in programs, making its analysis of programs imprecise. In this paper, a novel definition of responsibility based on the abstraction of event trace semantics is proposed, which can be applied in program security and other scientific fields. Briefly speaking, an entity ER is responsible for behavior B if and only if ER is free to choose its input value, and such a choice is the first one that ensures the occurrence of B in the forthcoming execution. Compared to current analysis methods, responsibility analysis is more precise. In addition, our definition of responsibility takes into account the cognizance of the observer, which, to the best of our knowledge, is a novel idea in program analysis.

    Comment: This is the extended version (33 pages) of a paper to appear in the Static Analysis Symposium (SAS) 201
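    The "first choice that ensures the occurrence of B" criterion can be illustrated with a hand-rolled toy, which is not the paper's abstract-interpretation formalization: here each entity picks a boolean input in a fixed order, B is a predicate on the complete choice vector, and the responsible entity is the one whose choice first makes B inevitable in every possible completion of the execution.

```python
from itertools import product

def guarantees(prefix, n, behavior):
    """True iff behavior B occurs in every completion of the prefix."""
    rest_len = n - len(prefix)
    return all(behavior(prefix + list(rest))
               for rest in product([0, 1], repeat=rest_len))

def responsible_entity(choices, behavior):
    """Index of the entity whose choice is the first to ensure B,
    or None if B is never guaranteed along this execution."""
    n = len(choices)
    for i in range(n):
        if guarantees(choices[:i + 1], n, behavior):
            return i
    return None

# Hypothetical scenario: a secret is leaked (behavior B) iff entity 0
# enables logging AND entity 1 writes the secret. Entity 1's choice is
# the first that makes the leak inevitable, so entity 1 is responsible.
leak = lambda c: c[0] == 1 and c[1] == 1
print(responsible_entity([1, 1, 0], leak))  # 1
print(responsible_entity([0, 1, 1], leak))  # None (B never occurs)
```

    Entity 0 is not responsible in the first run: after its choice alone, completions still exist in which B does not occur, matching the abstract's requirement that responsibility attaches to the first choice ensuring B.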